Add enable_profiling in runoptions #26846
base: main
Conversation
Pull request overview
This PR adds per-run profiling capability to ONNX Runtime by introducing enable_profiling and profile_file_prefix options to RunOptions. This allows users to enable profiling for individual inference runs independent of session-level profiling, providing more granular control over performance analysis.
Key changes:
- Added `enable_profiling` and `profile_file_prefix` fields to the `RunOptions` structure
- Modified execution providers to accept an `enable_profiling` parameter in the `GetProfiler()` method
- Enhanced timestamp formatting to include milliseconds for more precise profiling file naming
Reviewed changes
Copilot reviewed 19 out of 19 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| include/onnxruntime/core/framework/run_options.h | Added enable_profiling flag and profile_file_prefix configuration |
| onnxruntime/python/onnxruntime_pybind_state.cc | Exposed new profiling options to Python API |
| onnxruntime/core/session/inference_session.cc | Implemented run-level profiler creation, initialization, and lifecycle management |
| include/onnxruntime/core/framework/execution_provider.h | Updated GetProfiler signature to accept enable_profiling parameter |
| onnxruntime/core/providers/cuda/cuda_execution_provider.h/cc | Updated GetProfiler implementation for CUDA provider |
| onnxruntime/core/providers/vitisai/vitisai_execution_provider.h/cc | Updated GetProfiler implementation for VitisAI provider |
| onnxruntime/core/providers/webgpu/webgpu_execution_provider.h/cc | Implemented session vs run profiler separation using thread_local storage |
| onnxruntime/core/providers/webgpu/webgpu_context.h/cc | Added profiler registration/unregistration and multi-profiler event collection |
| onnxruntime/core/providers/webgpu/webgpu_profiler.cc | Updated to register/unregister with context and handle event collection |
| onnxruntime/core/common/profiler.h/cc | Added overloaded Start and EndTimeAndRecordEvent methods accepting explicit timestamps |
| onnxruntime/core/framework/utils.h/cc | Propagated run_profiler parameter through execution graph functions |
| onnxruntime/core/framework/sequential_executor.h/cc | Added run_profiler support in SessionScope and KernelScope for dual profiling |
yuslepukhin left a comment
🕐
Please comment on all of the Copilot issues before resolving them.
```cpp
  session_state_.Profiler().EndTimeAndRecordEvent(profiling::SESSION_EVENT, "SequentialExecutor::Execute", session_start_);
} else if (run_profiling_enabled) {
  StopEvent(profiling::SESSION_EVENT, "SequentialExecutor::Execute", session_start_);
}
```
Can we wrap this into a function `StopProfilingIfEnabled()`?
Added StopProfilingIfEnabled and StartProfilingIfEnabled as suggested. Done!
Description
Support run-level profiling
This PR adds support for profiling individual `Run` executions, similar to session-level profiling. Developers can enable run-level profiling by setting `enable_profiling` and `profile_file_prefix` in `RunOptions`. Once the run completes, a JSON profiling file is saved using `profile_file_prefix` + timestamp.

Key Changes

- A dedicated `run_profiler` is created in `InferenceSession::Run` and destroyed after the run completes. Using a dedicated profiler per run ensures that profiling data is isolated and prevents interleaving or corruption across runs.
- Overloaded `Start` and `EndTimeAndRecordEvent` functions have been added. These allow the caller to provide timestamps instead of relying on `std::chrono::high_resolution_clock::now()`, avoiding potential timing inaccuracies.
- Added `tls_run_profiler_` to support run-level profiling with the WebGPU Execution Provider (EP). This ensures that when multiple threads enable run-level profiling, each thread logs only to its own WebGPU profiler, keeping thread-specific data isolated.
- The timestamp is formatted as `HH:MM:SS.mm` instead of `HH:MM:SS` in the JSON filename to prevent conflicts when profiling multiple consecutive runs.
Previously, profiling was only available at the session level. Sometimes developers want to profile a specific run, which is what motivated this PR.
Some details
When profiling is enabled via RunOptions, it should ideally collect two types of events:

1. Operator-level events, used to calculate the CPU execution time of each operator.
2. EP-level events, used to measure GPU kernel execution time.
Unlike session-level profiling, we need to ensure that event collection is correct in multi-threaded scenarios.

For 1, this is straightforward to support (sequential_executor.cc): we use a thread-local storage (TLS) variable, RunLevelState (defined in profiler.h), to maintain run-level profiling state for each thread.

For 2, each Execution Provider (EP) has its own profiler implementation, and each EP must ensure correct behavior under run-level profiling. This PR ensures that the WebGPU profiler works correctly with run-level profiling.
Test Cases
Run-level profiling only:

- t1: `sess1.Run({ enable_profiling: true })`
- t2: `sess1.Run({ enable_profiling: false })`
- t3: `sess1.Run({ enable_profiling: true })`

Two profiling files are generated: one for t1 and one for t3.

Session-level profiling combined with run-level profiling:

- `sess1 = OrtSession({ enable_profiling: true })`
- `sess1.Run({ enable_profiling: true })`